Training-Time Optimization of a Budgeted Booster

Authors

  • Yi Huang
  • Brian Powers
  • Lev Reyzin
Abstract

We consider the problem of feature-efficient prediction: a setting where features have costs and the learner is limited by a budget constraint on the total cost of the features it can examine at test time. We focus on solving this problem with boosting by optimizing the choice of base learners in the training phase and stopping the boosting process when the learner’s budget runs out. We experimentally show that our method improves upon the boosting approach AdaBoostRS [Reyzin, 2011] and in many cases also outperforms the recent algorithm SpeedBoost [Grubb and Bagnell, 2012]. We provide a theoretical justification for our optimization method via the margin bound. We also experimentally show that our method outperforms pruned decision trees, a natural budgeted classifier.
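The training-time idea described in the abstract (greedily choosing base learners under a feature-cost constraint and stopping once the test-time budget is spent) can be illustrated with a small sketch. This is not the paper's exact algorithm: the decision-stump base learner, the rule that an already-purchased feature is free for later rounds, and the feature_costs/budget parameters below are illustrative assumptions.

```python
import numpy as np

def budgeted_adaboost(X, y, feature_costs, budget, max_rounds=100):
    """AdaBoost over decision stumps with a feature-cost budget (sketch).

    X: (n, d) feature matrix, y: labels in {-1, +1},
    feature_costs: length-d array of per-feature costs,
    budget: total feature cost the final ensemble may spend at test time.
    A feature bought by an earlier stump is free for all later stumps.
    """
    n, d = X.shape
    w = np.full(n, 1.0 / n)              # example weights
    used, spent, ensemble = set(), 0.0, []

    for _ in range(max_rounds):
        best = None                       # (err, feature, threshold, polarity, marginal cost)
        for j in range(d):
            marginal = 0.0 if j in used else float(feature_costs[j])
            if spent + marginal > budget:
                continue                  # this feature is no longer affordable
            for thr in np.unique(X[:, j]):
                for pol in (+1, -1):
                    pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, pol, marginal)
        if best is None:
            break                         # nothing affordable: stop boosting
        err, j, thr, pol, marginal = best
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
        w *= np.exp(-alpha * y * pred)    # standard AdaBoost reweighting
        w /= w.sum()
        ensemble.append((alpha, j, thr, pol))
        used.add(j)
        spent += marginal

    def predict(X_test):
        score = np.zeros(len(X_test))
        for alpha, j, thr, pol in ensemble:
            score += alpha * np.where(pol * (X_test[:, j] - thr) >= 0, 1, -1)
        return np.sign(score)

    return predict, spent
```

The sketch stops as soon as no remaining feature fits within the budget, mirroring the "stop when the learner's budget runs out" rule; the paper's contribution concerns how the per-round base-learner choice is optimized, which this simple lowest-error selection only approximates.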


Related articles

On Budgeted Optimization Problems

In this paper we give a general method to solve budgeted optimization problems in strongly polynomial time. The method can be applied to several known budgeted problems and in addition we show two new applications. The first one extends Frederickson’s and Solis-Oba’s result [10] to (poly)matroid intersections from single matroids. The second one is the budgeted version of the minimum cost circu...


Robust Combinatorial Optimization under Budgeted-Ellipsoidal Uncertainty

In the field of robust optimization, uncertain data is modeled by uncertainty sets, i.e., sets which contain all relevant outcomes of the uncertain parameters. The complexity of the related robust problem depends strongly on the shape of the uncertainty set. Two popular classes of uncertainty are budgeted uncertainty and ellipsoidal uncertainty. In this paper we introduce a new uncertainty class ...


Robust combinatorial optimization with variable budgeted uncertainty

Abstract: We introduce a new model for robust combinatorial optimization where the uncertain parameters belong to the image of multifunctions of the problem variables. In particular, we study the variable budgeted uncertainty, an extension of the budgeted uncertainty introduced by Bertsimas and Sim. Variable budgeted uncertainty can provide the same probabilistic guarantee as the budgeted uncer...


Breaking the curse of kernelization: budgeted stochastic gradient descent for large-scale SVM training

Online algorithms that process one example at a time are advantageous when dealing with very large data or with data streams. Stochastic Gradient Descent (SGD) is such an algorithm and it is an attractive choice for online Support Vector Machine (SVM) training due to its simplicity and effectiveness. When equipped with kernel functions, similarly to other SVM learning algorithms, SGD is suscept...
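The snippet above is cut off where it refers to the growth of the support set under kernelization (the "curse of kernelization"). A minimal illustration of one budget-maintenance strategy, removing the support vector with the smallest coefficient once a fixed budget is exceeded, is sketched below. The removal rule, the RBF kernel, and the parameter names are assumptions for illustration, not the specific method proposed in that paper.

```python
import numpy as np

def budgeted_kernel_sgd(X, y, budget=100, lam=0.01, gamma=1.0, epochs=5, seed=0):
    """Kernelized SGD for a hinge-loss SVM with a hard support-vector budget (sketch).

    When the support set exceeds `budget`, the vector with the smallest
    |coefficient| is discarded; published budgeted SGD variants also use
    merging or projection instead of removal.
    """
    rng = np.random.default_rng(seed)
    sv_x, sv_a = [], []                      # support vectors and their coefficients
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)            # Pegasos-style step size
            if sv_x:                         # f(x_i) = sum_j a_j * K(x_j, x_i), RBF kernel
                K = np.exp(-gamma * np.sum((np.asarray(sv_x) - X[i]) ** 2, axis=1))
                f = float(np.dot(sv_a, K))
            else:
                f = 0.0
            sv_a = [(1 - eta * lam) * a for a in sv_a]   # regularization shrinkage
            if y[i] * f < 1:                 # hinge loss is active: add a support vector
                sv_x.append(X[i])
                sv_a.append(eta * y[i])
                if len(sv_x) > budget:       # enforce the budget by removal
                    k = int(np.argmin(np.abs(sv_a)))
                    sv_x.pop(k)
                    sv_a.pop(k)
    return np.asarray(sv_x), np.asarray(sv_a)
```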


Some Impossibility Results for Budgeted Learning

We prove two impossibility results for budgeted learning with linear predictors. The first result shows that no budgeted learning algorithm can in general learn an ε-accurate d-dimensional linear predictor while observing less than d/ε attributes at training time. Our second result deals with the setting studied by Greiner et al. (2002), where the learner has all the information at training time ...



Publication date: 2015